Ijraset Journal For Research in Applied Science and Engineering Technology
Authors: Sayali Khandizod, Tejaswini Patil, Atharva Dode, Varad Banale, Prof. C. D. Bawankar
DOI Link: https://doi.org/10.22214/ijraset.2022.42260
Skin cancer is a major public health problem and has been on the rise since the advent of global warming. The erosion of the ozone layer allows the Sun’s ultraviolet rays to reach the earth directly and cause many skin diseases. Primary methods of diagnosis include external examination and biopsy, histopathological analysis and dermoscopic analysis. These methods require skilled specialists; without them, diagnostic accuracy is very low. If skin cancer is not detected early, it may be lethal to the patient. Because the cancer is external, images of the lesions can be used for analysis. We propose a seven-way skin cancer classifier based on convolutional neural networks that gives results comparable to skin specialists when diagnosing skin cancer from lesion images. Transfer learning with pre-trained models is used to improve the accuracy of the system. The HAM10000 dataset, a collection of about 10,000 dermoscopic images of skin lesions, is used to train the model, with MobileNet as the base network. We measured categorical, top-2 and top-3 accuracy and obtained a categorical accuracy of 85%, a top-2 accuracy of 91% and a top-3 accuracy of 96%. This tool can assist doctors in the early detection of skin cancer.
I. INTRODUCTION
Today’s world has seen a rapid rise in deadly diseases, the most alarming of them being cancer [1]. The human body is made up of trillions of cells. Each cell goes through a process called cell division in which new cells are generated and old ones are destroyed. Sometimes, however, cells multiply even when they should not and form lumps or tumours in the body. These tumours spread to nearby tissues and organs and may be lethal. It is therefore necessary to detect cancer at the earliest possible stage so that the lives of those affected can be saved. Cancer can form in any part of the body, and skin cancer has proven to be one of the most harmful.
Skin cancer is one of the most rapidly growing cancers known to the human race. Fortunately, the survival rate for a skin cancer patient is 96% if the cancer is diagnosed at a very early stage. Detecting skin cancer is easier than detecting other types of cancer because it forms as an external lesion, making the growth of cells visible, in contrast to internal forms of cancer. Diagnosis typically involves a biopsy, in which tissue from the affected site is removed and tested for the presence of disease. This requires surgical equipment and medical machinery, making it a tedious and costly process that is out of reach for the poor and for residents of small towns where such equipment is not regularly available. This conflicts with the need to diagnose the disease early for the patient to survive. Hence, there is a need for a solution that gives accurate results at a lower cost and can be made available to everyone at their convenience. The rise of research in computer vision and deep learning has opened up many avenues and made low-cost, high-accuracy solutions possible.
The presence of the features described in the next section indicates, but does not guarantee, that a lesion is cancerous. For an accurate prediction, we need to combine all of these characteristics and apply statistics and probability to reach a result. The advantage in skin cancer detection is that all of these features are external, so we can analyse images of the lesions for them, which makes the solution extremely feasible. We can develop an image analyser that applies deep learning models and statistics together and classifies images based on its predictions. Such a solution is lightweight, easy to use and available everywhere. It allows anyone to capture an image of a lesion growing on their skin, even with a mobile phone, and the model classifies the image as cancerous or non-cancerous. The result is the probability of the image being cancerous, and the user can then take appropriate action based on it.
II. THE ABCDE’S OF SKIN CANCER DETECTION
Skin cancer forms as a lesion anywhere on the body, but not all lesions that form on the body are cancerous. Hence it is necessary to be able to separate cancerous lesions from non-cancerous ones. A set of features known as the ABCDE rule helps in classifying lesions as cancerous or non-cancerous; a simple feature-extraction sketch is given after the subsections below. Figure 1 shows the ABCDE’s of skin cancer detection.
A. Asymmetry
If the lesions that form on the body are symmetric, they are generally not cancerous. A common example is an ordinary mole, which is roughly circular and hence benign. Cancerous lesions tend to be asymmetric, which makes asymmetry a deciding feature.
B. Border Irregularity
Observing the borders of a lesion can also help in the analysis. Smooth, regular borders are a sign of a benign lesion, whereas irregular borders suggest that the lesion may be cancerous.
C. Colour
Non-cancerous lesions usually have only a single colour in their patch, whereas cancerous ones often show more than one colour.
D. Diameter
If the diameter of a lesion is greater than 2.5 cm, it is more likely to be cancerous.
E. Evolving
The most important characteristic of cancerous cells is that they multiply uncontrollably. Hence, if a lesion is seen to be spreading or changing over time, there is a high probability that it is cancerous.
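The ABCD criteria above can be approximated numerically once a lesion has been segmented. The sketch below is an illustrative OpenCV/NumPy implementation that assumes a binary lesion mask is already available and that the spatial calibration of the dermoscope is known; the function name, thresholds and measurement choices are ours, not part of any standard library or of the original paper.

```python
import cv2
import numpy as np

def abcd_features(mask, image, mm_per_pixel):
    """Illustrative ABCD-style measurements from a segmented lesion.

    mask         : uint8 array, 255 inside the lesion, 0 outside (lesion roughly centred)
    image        : BGR image of the same size
    mm_per_pixel : spatial calibration of the dermoscopic image (assumed known)
    """
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    lesion = max(contours, key=cv2.contourArea)

    # A: asymmetry -- overlap between the mask and its horizontal mirror image
    flipped = cv2.flip(mask, 1)
    asymmetry = 1 - np.logical_and(mask, flipped).sum() / np.logical_or(mask, flipped).sum()

    # B: border irregularity -- 1 for a perfect circle, larger for ragged borders
    area = cv2.contourArea(lesion)
    perimeter = cv2.arcLength(lesion, True)
    irregularity = perimeter ** 2 / (4 * np.pi * area)

    # C: colour variation -- spread of hue values inside the lesion
    hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
    colour_std = hsv[..., 0][mask > 0].std()

    # D: diameter in millimetres, from the minimum enclosing circle
    (_, _), radius = cv2.minEnclosingCircle(lesion)
    diameter_mm = 2 * radius * mm_per_pixel

    return asymmetry, irregularity, colour_std, diameter_mm
```

Evolution (E) cannot be measured from a single image; it requires comparing photographs of the same lesion taken over time.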
III. RELATED WORK
Many researchers have worked in this field in recent years, applying a range of image processing and computer vision techniques, which we summarise here. In [2], Soniya Mane and Dr. Swati Shinde present a method to classify lesions using machine learning. The images are first pre-processed: each image is scanned for noise, which is removed by converting the image to black and white and applying a masking algorithm. The lesion is then segmented by cropping out the lower-intensity region of the image. The segmented lesion is fed to an SVM classifier that predicts whether it is cancerous or benign. They applied two separate SVM kernels, linear and RBF, and compared their results, obtaining an accuracy of 90.47% with the linear SVM and 85.71% with the RBF kernel. They also compared the linear SVM with a BayesNet classifier and found the BayesNet accuracy to be much lower than that of the linear SVM.
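For context, the linear-versus-RBF comparison described in [2] can be reproduced in outline with scikit-learn. The snippet below is an illustration of the named technique, not the authors' code; the synthetic feature matrix is a placeholder for features extracted from segmented lesions.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Placeholder features and labels; in [2] these come from the segmented lesion images,
# with y = 0 for benign and y = 1 for malignant.
X, y = make_classification(n_samples=200, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

for kernel in ("linear", "rbf"):
    clf = SVC(kernel=kernel).fit(X_train, y_train)
    acc = accuracy_score(y_test, clf.predict(X_test))
    print(f"SVM ({kernel}) accuracy: {acc:.2%}")
```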
Arslan Javaid, Muhammad Sadiq and Faraz Akram in [3] employ a combination of statistics and image processing to improve the accuracy of their model. They apply image resizing and noise removal to obtain a large, noiseless dataset, compute the mean and standard deviation of each image, convert the image from RGB to greyscale, apply Otsu’s thresholding, and then perform a collection of morphological operations. The transformed image is used for feature extraction: 36 colour features are extracted using three colour spaces, namely RGB, HSV and LAB, and histograms are used to obtain shape features. The obtained features are reduced using Principal Component Analysis (PCA). Three machine learning algorithms are compared: SVM gave an accuracy of 88.17%, Quadratic Discriminant 90.84%, and Random Forest performed best at 93.89%.
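The preprocessing and dimensionality reduction described in [3] can be sketched as follows. This is an illustration of the named techniques (Otsu thresholding, morphological clean-up, PCA) rather than the authors' implementation; the random feature matrix is a placeholder for the 36 extracted colour features.

```python
import cv2
import numpy as np
from sklearn.decomposition import PCA

def lesion_mask(bgr_image):
    """Greyscale conversion, Otsu thresholding and a morphological clean-up,
    in the spirit of the preprocessing described in [3]."""
    grey = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(grey, (5, 5), 0)
    _, mask = cv2.threshold(blurred, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)  # fill small holes
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # remove small specks
    return mask

# Dimensionality reduction of the extracted features (placeholder values)
features = np.random.rand(100, 36)               # stands in for the 36 colour features
reduced = PCA(n_components=10).fit_transform(features)
```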
In [4], Shahana Sherin K C and Shayini R use the mean, standard deviation, variance and skewness of the image together with GLCM features. The grey-level co-occurrence matrix (GLCM) is a statistical texture analysis method that builds a 2D matrix measuring how often pairs of pixel values occur together in an image. For classification, they apply a neural network with 21 neurons arranged in one input layer, one hidden layer and one output layer. The neural network is fairly basic, and they rely more on the statistical measures to get correct results. They tested the system on only 166 dermoscopic images and claim to have obtained an accuracy of 100%.
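A GLCM and its derived texture statistics can be computed, for example, with scikit-image (version 0.19+ function names). The snippet below illustrates the idea on a placeholder greyscale patch; it is not the pipeline of [4].

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

# Placeholder lesion patch; in practice this would be the cropped greyscale lesion.
patch = (np.random.rand(64, 64) * 255).astype(np.uint8)

# Co-occurrence of pixel pairs at distance 1, horizontally and vertically
glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)

contrast    = graycoprops(glcm, "contrast").mean()
homogeneity = graycoprops(glcm, "homogeneity").mean()
energy      = graycoprops(glcm, "energy").mean()
```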
He Huang, Pegah Kharazmi and co-authors in [5] work on basal cell carcinoma images rather than melanoma. They employ a deep learning algorithm called SSAE in an unsupervised model, using 32 high-quality clinical images from the Vancouver Skin Care Centre. The translucent region of each image is extracted and fed to the neural network, and they obtained an accuracy of 93%.
Ramis İleri, Fatma Latifoğlu and Semra İçer [6] work primarily on melanoma using image processing and deep learning techniques. They apply the same greyscale conversion and lesion extraction using an intensity-mapping technique to obtain the lesion from the entire image; these steps help reduce the noise in the image. They then apply an artificial neural network to classify the lesion and compare various models. The multilayer perceptron gave an accuracy of 99.8% and pattern recognition 98.3%, while SVM reached 96.7% and KNN 95% with this technique.
In [7], Muhammad Ali Farooq, Muhammad Aatif Mobeen Azhar and Rana Hammad Raza propose an Automatic Lesion Detection System (ALDS) for skin cancer classification using SVM and neural classifiers. They apply sharpening, noise removal, active contours and merged segmentation for pre-processing the image. They then extract shape features such as asymmetry, compactness, variance and diameter, texture features such as GLCM statistics and coarseness, and colour features such as variance, entropy and skewness. Finally, they apply SVM and ANN classifiers to label the lesion as low, medium or high risk.
A seven-way skin cancer classifier is proposed in [8]. The authors, Saket S. Chaturvedi, Kajol Gupta and Prakash S. Prasad, classify skin lesion images into seven types of skin cancer. They work on the large HAM10000 dataset, which has labelled data for seven classes, and apply shifts and rotations to generate auxiliary copies of each image and increase the variety of the dataset. The pre-processed images are then fed to a pre-trained network called MobileNet; this transfer learning helps improve the overall accuracy of the system. The system achieved a top-3 accuracy of 95.84%.
IV. THE DATASET
Deep learning models require a large amount of data to give accurate results, and for our problem we need a very specific collection of labelled skin lesion images. The HAM10000 dataset is an aggregation of different kinds of skin cancer images: it consists of 10,015 dermoscopy images obtained from skin patients in Australia and Austria. Of the 10,015 images, 6,705 are non-cancerous, 1,113 are cancerous and 2,197 are of unknown status. The images come with a CSV file denoting which image belongs to which kind of skin cancer. Figure 2 shows a snapshot of the HAM10000 dataset.
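As an illustration, the class distribution and patient attributes can be inspected directly from the accompanying metadata CSV. The file and column names below follow the public HAM10000 release (HAM10000_metadata.csv with dx, age and localization columns) and are assumptions insofar as a local copy may be organised differently.

```python
import pandas as pd

# Metadata shipped with the HAM10000 images (path assumed)
meta = pd.read_csv("HAM10000_metadata.csv")

print(meta["dx"].value_counts())                    # images per diagnosis (nv, mel, bkl, ...)
print(meta.groupby("dx")["age"].median())           # typical patient age per diagnosis
print(meta["localization"].value_counts().head())   # most frequently affected body sites
```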
V. PROPOSED METHODOLOGY
A. Data Preparation
A deep learning approach needs accurate, clean data to give accurate results. Hence, the first step in any deep learning approach is preparing the data so that it is fit for building a solution. We removed blurred images from the training set, as well as images in which the skin lesion was not clearly visible.
Once the training set and validation set are fixed, we remove noise from the images, including air bubbles and foreign entities such as gels or ointments applied to the lesion before the picture was taken; this helps achieve a high classification rate. Many techniques help with noise removal, such as Gaussian filtering and median blurring, which can remove even small quantities of noise, and we also apply a simple thresholding algorithm. Filtering out the best possible images from the roughly 10,000 available is a significant challenge. Data normalization is a technique that helps remove data redundancy; common algorithms include min-max normalization, z-score normalization and decimal scaling. We applied z-score normalization, and any image falling below a threshold was considered redundant and removed.
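As an illustration, a minimal OpenCV/NumPy sketch of this kind of preprocessing is given below. The filter sizes, the 224x224 resize and the per-image z-score are illustrative choices, not the exact pipeline of the paper.

```python
import cv2
import numpy as np

def preprocess(path, size=(224, 224)):
    """Denoise, resize and z-score-normalise one dermoscopic image (illustrative)."""
    image = cv2.imread(path)
    image = cv2.medianBlur(image, 3)           # suppress small artefacts such as air bubbles
    image = cv2.GaussianBlur(image, (3, 3), 0) # smooth residual sensor noise
    image = cv2.resize(image, size)            # common input size for MobileNet-style models
    image = image.astype(np.float32)
    return (image - image.mean()) / (image.std() + 1e-7)  # per-image z-score normalisation
```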
B. Data Augmentation
Once the best images have been selected for building the model, we need to generate a bigger dataset to compensate for the images lost during data reduction. We artificially increase the quantity of images by slightly modifying them and creating copies, without having to collect new data. This helps prevent the model from overfitting, since the dataset is enhanced by oversampling the best-quality images. We utilized various augmentation techniques such as random cropping, rotation, colour shifting and mirroring.
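The paper does not list the exact augmentation parameters, so the Keras ImageDataGenerator sketch below shows one way such a pipeline could look; the ranges and the train/ directory layout are illustrative assumptions.

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Illustrative ranges; the paper's exact settings are not specified.
augmenter = ImageDataGenerator(
    rotation_range=30,         # random rotation
    width_shift_range=0.1,     # small shifts approximate random cropping
    height_shift_range=0.1,
    channel_shift_range=20.0,  # colour shifting
    horizontal_flip=True,      # mirroring
    vertical_flip=True,
)

# Assumes images are sorted into one sub-folder per class under "train/"
train_flow = augmenter.flow_from_directory(
    "train/", target_size=(224, 224), batch_size=32, class_mode="categorical")
```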
C. Transfer Learning using CNN
The shortcoming of a small dataset can be overcome by using a pre-trained model; this is known as transfer learning. We generally use models that have been trained on related problems. Various pre-trained networks are available, such as AlexNet, ResNet, VGG16, DenseNet and MobileNet, most of them trained on the ImageNet dataset. We import the MobileNet model and replace its final few layers with our proposed CNN head, using a sigmoid layer at the end instead of a softmax. The weights of the new layers are learned by fine-tuning the network with backpropagation, using the Adam optimizer as the gradient descent algorithm. Finally, the augmentation step described above is applied during training to overcome the limitations of the labelled dataset. The figure below shows the architecture of our CNN model.
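A minimal sketch of this transfer learning setup in TensorFlow/Keras is given below. The sigmoid output layer and the Adam optimizer follow the paper; the dropout rate, learning rate, freezing scheme and metric names are our illustrative assumptions, not the paper's exact configuration.

```python
import tensorflow as tf
from tensorflow.keras.applications import MobileNet
from tensorflow.keras import layers, models, optimizers

NUM_CLASSES = 7  # the seven HAM10000 diagnosis categories

# MobileNet pre-trained on ImageNet, without its original classification head
base = MobileNet(weights="imagenet", include_top=False,
                 input_shape=(224, 224, 3), pooling="avg")
base.trainable = False  # freeze the pre-trained layers; only the new head is trained at first

model = models.Sequential([
    base,
    layers.Dropout(0.25),                               # illustrative regularisation
    layers.Dense(NUM_CLASSES, activation="sigmoid"),    # sigmoid output, as stated in the paper
])

model.compile(
    optimizer=optimizers.Adam(learning_rate=1e-3),      # Adam, per the paper; rate is illustrative
    loss="categorical_crossentropy",
    metrics=["categorical_accuracy",
             tf.keras.metrics.TopKCategoricalAccuracy(k=2, name="top2_accuracy"),
             tf.keras.metrics.TopKCategoricalAccuracy(k=3, name="top3_accuracy")],
)
```

After the new head converges, the last few MobileNet layers can be unfrozen and training continued at a lower learning rate to fine-tune the network.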
There are many hyperparameters that we use to tune the neural network. The major ones are given below:
VI. RESULTS
The initial data exploration helped us find relations between the type of cancer and the patient data. We found that dermatofibroma, basal cell carcinoma, melanocytic nevi and actinic keratosis are very rarely found in patients below the age of 20, whereas vascular lesions and melanoma can occur at any age. Skin cancer is most common between the ages of 30 and 70, with incidence peaking around the age of 45. The extremities and torso appear to be the body sites most prone to skin cancer.
We used 1,014 images separated from the dataset with a train-test split to validate the trained model. The confusion matrix of the model was generated for the seven classes, and we then calculated the per-class categorical accuracy, the top-2 accuracy and the top-3 accuracy. The model obtained a categorical accuracy of 85% and performed best for the Melanocytic Nevi class. Classifying Benign Keratosis was challenging because it shows features similar to Melanoma and Melanocytic Nevi. The model obtained a top-2 accuracy of 91% and a top-3 accuracy of 96%.
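For reference, a short scikit-learn sketch of how these metrics can be computed from model outputs is shown below. The random arrays are placeholders standing in for the true labels and the predicted probabilities of the 1,014 validation images, which in practice come from the validation split and model.predict.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, top_k_accuracy_score

# Placeholders so the sketch runs; replace with real labels and predicted probabilities.
y_true = np.random.randint(0, 7, 1014)            # integer class label per validation image
probs = np.random.rand(1014, 7)                   # one row of 7 class probabilities per image
probs /= probs.sum(axis=1, keepdims=True)

y_pred = probs.argmax(axis=1)
print(confusion_matrix(y_true, y_pred))                                   # 7x7 confusion matrix
print("categorical:", (y_pred == y_true).mean())
print("top-2:", top_k_accuracy_score(y_true, probs, k=2, labels=np.arange(7)))
print("top-3:", top_k_accuracy_score(y_true, probs, k=3, labels=np.arange(7)))
```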
We developed a desktop application that loads the image and feeds it to the model for predictions.
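A minimal prediction routine that such an application might wrap is sketched below, assuming a Keras model saved to disk. The class ordering and the 1/255 scaling must match the training setup and are assumptions here, as are the function and file names.

```python
import numpy as np
from tensorflow.keras.models import load_model
from tensorflow.keras.preprocessing import image as keras_image

# HAM10000 diagnosis codes in alphabetical order; must match the training class indices.
CLASS_NAMES = ["akiec", "bcc", "bkl", "df", "mel", "nv", "vasc"]

def predict_lesion(model_path, image_path):
    """Load the saved classifier and return the most likely diagnosis for one lesion image."""
    model = load_model(model_path)
    img = keras_image.load_img(image_path, target_size=(224, 224))
    x = keras_image.img_to_array(img)[np.newaxis] / 255.0   # scaling assumed to match training
    probs = model.predict(x)[0]
    return CLASS_NAMES[int(probs.argmax())], float(probs.max())
```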
VII. CONCLUSION
In this paper, the proposed MobileNet transfer learning model achieves a better classification rate than other transfer learning models. The proposed method can also classify benign and malignant skin lesions by replacing the output activation layer with a sigmoid for binary classification. The method was evaluated on the HAM10000 dataset, on which we obtained better training and testing accuracy than other existing transfer learning models. The imbalanced dataset and the absence of a large number of images initially prevented the model from achieving better accuracy; we therefore balanced the dataset, which improved the classification accuracy. We obtained a categorical accuracy of 85% and a top-3 accuracy of 96%. We also analysed the data and identified factors affecting skin cancer. Future work will integrate a patient's personal data and explore the relations between this data and the presence of the disease, in order to estimate the probability of a person developing the disease and perhaps even assist experts in starting treatment before lesions occur, taking a preventive approach.
REFERENCES
[1] Eko Handoyo and M. Arfan, "Ticketing Chatbot Service using Serverless NLP Technology," 2018 5th International Conference on Information Technology, Computer and Electrical Engineering (ICITACEE), Semarang, Indonesia, 2018, pp. 325-330.
[2] S. Mane and S. Shinde, "A Method for Melanoma Skin Cancer Detection Using Dermoscopy Images," 2018 Fourth International Conference on Computing Communication Control and Automation (ICCUBEA), 2018, pp. 1-6, doi: 10.1109/ICCUBEA.2018.8697804.
[3] A. Javaid, M. Sadiq and F. Akram, "Skin Cancer Classification Using Image Processing and Machine Learning," 2021 International Bhurban Conference on Applied Sciences and Technologies (IBCAST), 2021, pp. 439-444, doi: 10.1109/IBCAST51254.2021.9393198.
[4] K. C. Shahana Sherin and R. Shayini, "Classification of Skin Lesions in Digital Images for the Diagnosis of Skin Cancer," 2020 International Conference on Smart Electronics and Communication (ICOSEC), 2020, pp. 162-166, doi: 10.1109/ICOSEC49089.2020.9215271.
[5] P. Kharazmi, M. I. AlJasser, H. Lui, Z. J. Wang and T. K. Lee, "Automated Detection and Segmentation of Vascular Structures of Skin Lesions Seen in Dermoscopy, With an Application to Basal Cell Carcinoma Classification," IEEE Journal of Biomedical and Health Informatics, vol. 21, no. 6, pp. 1675-1684, Nov. 2017, doi: 10.1109/JBHI.2016.2637342.
[6] R. İleri, F. Latifoğlu and S. İçer, "Artificial Neural Network Based Diagnostic System For Melanoma Skin Cancer," 2019 Medical Technologies Congress (TIPTEKNO), 2019, pp. 1-4, doi: 10.1109/TIPTEKNO.2019.8894930.
[7] M. A. Farooq, M. A. M. Azhar and R. H. Raza, "Automatic Lesion Detection System (ALDS) for Skin Cancer Classification Using SVM and Neural Classifiers," 2016 IEEE 16th International Conference on Bioinformatics and Bioengineering (BIBE), 2016, pp. 301-308, doi: 10.1109/BIBE.2016.53.
[8] S. S. Chaturvedi, K. Gupta and P. S. Prasad, "Skin Lesion Analyser: An Efficient Seven-Way Multi-class Skin Cancer Classification Using MobileNet," Advanced Machine Learning Technologies and Applications, pp. 165-176, 2020, doi: 10.1007/978-981-15-3383-9_15.
[9] R. Elghondakly, S. Moussa and N. Badr, "Waterfall and agile requirements-based model for automated test cases generation," 2015 IEEE Seventh International Conference on Intelligent Computing and Information Systems (ICICIS), 2015, pp. 607-612, doi: 10.1109/IntelCIS.2015.7397285.
[10] A. A. A. Jilani, A. Nadeem, T. Kim and E. Cho, "Formal Representations of the Data Flow Diagram: A Survey," 2008 Advanced Software Engineering and Its Applications, 2008, pp. 153-158, doi: 10.1109/ASEA.2008.34.
[11] M. A. Hearst, S. T. Dumais, E. Osuna, J. Platt and B. Scholkopf, "Support vector machines," IEEE Intelligent Systems and their Applications, July 1998.
[12] Yujun Yang, Jianping Li and Yimei Yang, "The research of the fast SVM classifier method," 2015 12th International Computer Conference on Wavelet Active Media Technology and Information Processing (ICCWAMTIP), 2015, pp. 121-124, doi: 10.1109/ICCWAMTIP.2015.7493959.
[13] S. Ghosh, A. Dasgupta and A. Swetapadma, "A Study on Support Vector Machine based Linear and Non-Linear Pattern Classification," 2019 International Conference on Intelligent Sustainable Systems (ICISS), 2019, pp. 24-28, doi: 10.1109/ISS1.2019.8908018.
[14] S. Ghosh, A. Dasgupta and A. Swetapadma, "A Study on Support Vector Machine based Linear and Nonlinear Pattern Classification," International Conference on Intelligent Sustainable Systems (ICISS), Feb. 2019.
Copyright © 2022 Sayali Khandizod, Tejaswini Patil, Atharva Dode, Varad Banale, Prof. C. D. Bawankar. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Paper Id : IJRASET42260
Publish Date : 2022-05-05
ISSN : 2321-9653
Publisher Name : IJRASET